A two-cut approach in the analytic center cutting plane method

Authors

  • Jean-Louis Goffin
  • Jean-Philippe Vial
Abstract

We analyze the two-cut generation scheme in the analytic center cutting plane method. We propose an optimal updating direction when the two cuts are central. The direction is optimal in the sense that it maximizes the product of the new slacks within the trust region defined by Dikin's ellipsoid. We prove convergence in $O(n^2/\varepsilon^2)$ calls to the oracle, and show that the recovery of a new analytic center can be done in $O(1)$ primal damped Newton steps.

Keywords: Primal Newton algorithm, Analytic center, Cutting plane method, Two cuts.

This work has been completed with the support of the Fonds National Suisse de la Recherche Scientifique, grant 12-42503.94, of the Natural Sciences and Engineering Research Council of Canada, grant number OPG0004152, and of the FCAR of Quebec.

GERAD/Faculty of Management, McGill University, 1001 Sherbrooke West, Montreal, Que., H3A 1G5, Canada. E-mail: [email protected].

LOGILAB/Management Studies, University of Geneva, 102, Bd Carl-Vogt, CH-1211 Genève 4, Switzerland. E-mail: [email protected].

1 Introduction

The analytic center cutting plane method (ACCPM) [4, 18] is an efficient algorithm in practice [2, 3]. The complexity of related algorithms was given in [1, 13], and subsequently in [5]. Extensions to deep cuts were given in [6], and to very deep cuts in [8]. The method studied in [8] corresponds to the practical implementation of ACCPM [9] with a single cut.

In practice, it often occurs that the oracle in the cutting plane scheme generates multiple cuts. It is thus interesting to analyze the case where several cuts are introduced at a time. The papers [11, 19, 15] show that it is possible to handle several cuts at a time provided they are central [19] or moderately shallow [11]. Although those analyses show how one can recover feasibility after introducing multiple cuts, they offer no clear argument for the choice of a feasibility restoration direction. In this paper, we analyze the case of two cuts.
We note that two cuts occur in the application to nondifferentiable optimization, where a new cutting plane and a new upper bound on the objective (or possibly a shift in this bound) are introduced at each step. We show that when the two cuts are central, there exist primal and dual directions which allow a best move towards primal and dual feasibility. However, contrary to the case of a single cut, it does not seem possible to generate a point from which one can obtain a new approximate center in $O(1)$ iterations by taking Newton steps. Nor does it seem possible to use an argument based on weighted potentials and weighted analytic centers. However, an argument using the primal, dual and primal-dual potentials at this new optimal primal and dual point proves that $O(1)$ damped Newton steps are enough to recover centrality.

The updating direction depends on the cosine, in the metric of Dikin's ellipsoid, of the angle between the normals to the cuts. We make the role of this cosine explicit in the formula that bounds the number of iterations, making it transparent that acute angles are favorable to convergence. However, this term does not change the general complexity result. We thus obtain that convergence occurs after $O(n^2/\varepsilon^2)$ calls to the oracle, with $O(1)$ primal damped Newton steps per call to the oracle.

2 Analytic center cutting plane method

2.1 Cutting plane

The problem of interest is that of finding a point in a convex set $C \subset \mathbb{R}^n$. We make the following assumptions.

Assumption 2.1 The set $C$ is convex, contains a ball of radius $\varepsilon > 0$ and is contained in the cube $0 \le y \le e$.

Assumption 2.2 The set $C$ is described by an oracle. That is, the oracle either confirms that $y \in C$, or returns at least one cutting plane that contains $C$ and does not contain $y$ in its interior. A cut at $y \notin C$ takes the form
$$a^T y' \le a^T y - \gamma.$$
If $\gamma > 0$, the cut is deep; if $\gamma < 0$, the cut is shallow; if $\gamma = 0$, the cut is central, i.e., it passes through $y$.

Assumption 2.3 All the cutting planes generated have been scaled so that $\|a\| = 1$ (without loss of generality).
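The oracle protocol of Assumption 2.2 can be illustrated by a minimal sketch. Everything here is an assumption made for illustration only: the ball-shaped target set, the name `ball_oracle`, and its signature are not part of the paper, which treats the oracle abstractly.

```python
import numpy as np

def ball_oracle(y, center, radius):
    """Separation oracle (in the spirit of Assumption 2.2) for the
    hypothetical target set C = {y' : ||y' - center|| <= radius}.

    Returns None when y is in C; otherwise returns a central cut
    (a, a^T y) with ||a|| = 1 (Assumption 2.3), so that C is contained
    in the half-space {y' : a^T y' <= a^T y}.
    """
    g = y - center
    dist = np.linalg.norm(g)
    if dist <= radius:
        return None                 # y is feasible: no cut needed
    a = g / dist                    # unit normal of the separating hyperplane
    return a, a @ y                 # central cut through the query point
```

A cutting plane method only ever interacts with $C$ through such calls; the ball is merely the simplest convex body for which the separating hyperplane has a closed form.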
A cutting plane algorithm constructs a sequence of query points $\{y^k\}$. The answers of the oracle to the queries, together with the cube $0 \le y \le e$, define a polyhedral outer approximation
$$F_D = \{y : A^T y \le c\}$$
of $C$. Since $A$ contains the identity matrix associated with the cube, $A$ has full row rank. Therefore there is a one-to-one correspondence between points $y \in F_D$ and the slacks $s = c - A^T y$, leading to the equivalent definition
$$F_D = \{s \ge 0 : A^T y + s = c\}.$$
The number of columns of $A$ is denoted by $m$; it equals $2n$ plus the number of cutting planes generated up to the $k$th iteration. The analytic center cutting plane method chooses as a query point an approximate analytic center of $F_D$.

2.2 Analytic center

The analytic center of $F_D$ is the unique point maximizing the dual potential
$$\varphi_D(s) = \sum_{i=1}^m \log s_i, \quad s = c - A^T y > 0.$$
We formally introduce the optimization problem
$$\max \{\varphi_D(s) : s = c - A^T y > 0\} \qquad (1)$$
and the associated first order optimality conditions
$$xs = e, \quad A^T y + s = c, \; s > 0, \quad Ax = 0, \; x > 0,$$
where $x$ is a vector in $\mathbb{R}^m$. The notation $xs$ indicates the Hadamard, or componentwise, product of the two vectors $x$ and $s$.

The analytic center can alternatively be defined as the optimal solution of
$$\max \{\varphi_P(x) : Ax = 0, \; x > 0\}, \qquad (2)$$
where
$$\varphi_P(x) = -c^T x + \sum_{i=1}^m \log x_i$$
denotes the primal potential. One easily checks that problem (2) shares with (1) the same first order optimality conditions. The duality relationship between $\varphi_P$ and $\varphi_D$ results from the simple inequality
$$\log t \le t - 1, \quad \forall t > 0, \text{ with equality if and only if } t = 1. \qquad (3)$$
Indeed, let $x \in \operatorname{int} F_P$ and $s \in \operatorname{int} F_D$. Applying (3) with $t = x_i s_i$ and summing the resulting inequalities, one gets
$$\sum_{i=1}^m \log x_i + \sum_{i=1}^m \log s_i \le x^T s - m = c^T x - m,$$
with equality if and only if $xs = e$. Therefore,
$$\varphi_P(x) + \varphi_D(s) \le -m, \qquad (4)$$
with equality if and only if $xs = e$.
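The duality bound (4) is easy to check numerically. The sketch below builds a small instance of $F_D$ (the cube in $\mathbb{R}^2$ plus two hypothetical normalized cuts, data chosen only for illustration), a dual interior point with its slack, and a primal interior point with $Ax = 0$ constructed from the cube columns, and evaluates both potentials.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2

# Outer approximation F_D = {y : A^T y <= c}: the cube 0 <= y <= e
# plus two normalized cuts a5, a6 (hypothetical data for illustration).
a5 = rng.normal(size=n); a5 /= np.linalg.norm(a5)
a6 = rng.normal(size=n); a6 /= np.linalg.norm(a6)
A = np.hstack([np.eye(n), -np.eye(n), a5[:, None], a6[:, None]])
c = np.concatenate([np.ones(n), np.zeros(n), [1.5, 1.5]])
m = A.shape[1]

def phi_D(s):
    """Dual potential: sum of log-slacks."""
    return np.sum(np.log(s))

def phi_P(x):
    """Primal potential: -c^T x + sum of log x_i."""
    return -c @ x + np.sum(np.log(x))

# A dual interior point and its slack vector
y = np.full(n, 0.4)
s = c - A.T @ y

# A primal interior point with A x = 0, built from the cube columns:
# the +I block absorbs the contribution of the two cut columns.
x_cuts = np.array([0.01, 0.01])
x_hi = np.ones(n) - 0.01 * (a5 + a6)
x_lo = np.ones(n)
x = np.concatenate([x_hi, x_lo, x_cuts])
```

For any such pair, `phi_P(x) + phi_D(s)` stays below $-m$, with equality only at the analytic center pair satisfying $xs = e$.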
Since the sum of the potentials plays a role in our convergence analysis, we find it convenient to introduce formally the primal-dual potential
$$\varphi_{PD}(x, s) = \varphi_P(x) + \varphi_D(s).$$
Let us also remark that for any pair $(x, s)$ with $x \in \operatorname{int} F_P$ and $s \in \operatorname{int} F_D$, one has $c^T x = x^T s > 0$.

Finally, we define approximate centers by relaxing the condition $xs = e$ in the first order optimality conditions. Formally, any solution $(x, y, s)$ of
$$\|e - xs\| \le \theta < 1, \qquad (5)$$
$$A^T y + s = c, \; s > 0, \qquad (6)$$
$$Ax = 0, \; x > 0, \qquad (7)$$
defines a pair of $\theta$-approximate centers, or $\theta$-centers for short.

3 Primal Newton method

We present here a version of the primal Newton method that slightly differs from the standard one as given in, e.g., [7]. The primal Newton method is defined with respect to a second order approximation of $\varphi_P(x)$. The Newton direction $\Delta x$ is thus the solution of
$$\max \left\{ -\tfrac{1}{2}\, \Delta x^T X^{-2} \Delta x + (x^{-1} - c)^T \Delta x : A\, \Delta x = 0 \right\}.$$
Letting $\Delta x = x\, p(x)$, we get
$$p(x) = e - xc + XA^T (A X^2 A^T)^{-1} A X^2 c = e - xs,$$
where $s = c - A^T y$ and $y = (A X^2 A^T)^{-1} A X^2 c$. It is convenient to interpret $p(x)$ as the projection of $e - Xc$ onto the null space of $AX$. Formally we write
$$p(x) = P_{AX}(e - Xc) = e - P_{AX} Xc.$$
The right equality follows from $P_{AX} e = e$, since $e$ is in the null space of $AX$: indeed $AXe = Ax = 0$. Finally, $p(x)$ can alternatively be written as
$$p(x) = e - xs, \qquad (8)$$
where
$$s = \arg\min \{\|e - xs\| : s = c - A^T y\}. \qquad (9)$$

Remark 3.1 Problem (2) is the classical primal barrier problem
$$\min \left\{ c^T x - \mu \sum_{i=1}^m \log x_i : Ax = b, \; x > 0 \right\}$$
with $\mu = 1$ and $b = 0$. The Newton direction defined by (8) and (9) is the same as for the barrier problem.

In view of the equivalence of the primal problem (2) with the standard barrier problem, we can state some basic properties of the Newton direction.

Lemma 3.1 (Quadratic convergence) Assume $\|p(x)\| < 1$ and let $s$ be defined as in (9). Then $s$ is dual interior feasible. Besides, $x^+ = x + x\,p(x) > 0$ is interior feasible and
$$\|p(x^+)\| \le \|p(x)\|^2.$$

Proof The proof is standard; we repeat it for the sake of completeness. Since $p(x) = e - xs$, it follows from $\|p(x)\| < 1$ that $x^+ > 0$.
Besides,
$$\|p(x^+)\| \le \|e - s x^+\| = \|e - s(x + x\,p(x))\| = \|e - sx(2e - xs)\| = \|(e - sx)^2\| \le \|e - sx\|^2 = \|p(x)\|^2,$$
where the first inequality holds because $p(x^+)$ is defined with the minimizing slack of (9).

As a direct corollary, we can bound the potential at a $\theta$-center.

Corollary 3.2 Assume $\|p(x)\| \le \theta < 1$. Let $x^c$ be the exact analytic center and denote $\varphi_P^c = \varphi_P(x^c)$ the optimal value of the potential. Then,
$$\varphi_P^c \ge \varphi_P(x) \ge \varphi_P^c - \frac{\theta^2}{1 - \theta^2}.$$

Proof Let us use the primal Newton method with starting point $x$. Then
$$\varphi_P(x + x\,p(x)) = -c^T(x + x\,p(x)) + \sum_{i=1}^m \log x_i + \sum_{i=1}^m \log(1 + p_i(x)) = \varphi_P(x) - c^T(x\,p(x)) + \sum_{i=1}^m \log(1 + p_i(x)).$$
Note that
$$\sum_{i=1}^m \log(1 + p_i(x)) \le \sum_{i=1}^m p_i(x) = e^T(e - xs) = m - x^T s = m - c^T x.$$
We also have
$$c^T(x\,p(x)) = (Xc)^T P_{AX}(e - Xc) = -(e - Xc)^T P_{AX}(e - Xc) + e^T P_{AX}(e - Xc) = -\|p(x)\|^2 + m - c^T x.$$
In view of the above inequality and equations, one has
$$\varphi_P(x + x\,p(x)) \le \varphi_P(x) + \|p(x)\|^2.$$
Repeated use of the quadratic convergence lemma yields
$$\varphi_P^c \le \varphi_P(x) + \theta^2 + \theta^4 + \theta^8 + \cdots \le \varphi_P(x) + \frac{\theta^2}{1 - \theta^2}.$$

The second property of the Newton direction deals with the general case, when $\|p(x)\|$ is not necessarily small.

Lemma 3.3 (Potential increase) Assume $\|p(x)\| > \theta$, and let $\alpha = \frac{1}{1 + \|p(x)\|}$. The damped Newton step $x(\alpha) = x(e + \alpha\,p(x)) > 0$ is primal feasible. Moreover,
$$\varphi_P(x(\alpha)) \ge \varphi_P(x) + \theta - \log(1 + \theta).$$

The primal algorithm computes an approximate analytic center as follows. Let $x^0 > 0$ with $Ax^0 = 0$ be the initial point. Damped Newton steps are applied to the current iterate until $\|p(x)\| \le \theta$, and thus a $\theta$-center $\bar{x}$ is reached.

Theorem 3.4 Let $s \in \operatorname{int} F_D$ be any point in the interior of the dual polytope. Let $0 < \theta < 1$ and $\delta_1 = \theta - \log(1 + \theta)$. If the primal Newton algorithm starts at a feasible point $x^0$, it terminates at a point $\bar{x}$ satisfying $\|p(\bar{x})\| \le \theta$ after at most
$$\frac{-m - \varphi_D(s) - \varphi_P(x^0)}{\delta_1}$$
damped Newton steps.

Although the cutting plane algorithm analyzed in this paper is based on the primal Newton algorithm, it is worth mentioning that similar results (e.g., quadratic convergence and a guaranteed potential increase) hold for the dual Newton method. Therefore, the equivalent of Corollary 3.2 holds for the dual.
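The primal algorithm described above can be sketched numerically. This is an illustrative implementation under simplifying assumptions (dense linear algebra, a fixed iteration cap), not the paper's code: it computes $y$ from (9) by solving $(AX^2A^T)y = AX^2c$, forms $p(x) = e - xs$, and takes damped steps with $\alpha = 1/(1+\|p(x)\|)$.

```python
import numpy as np

def primal_newton(A, c, x, theta=1e-6, max_iter=500):
    """Damped primal Newton method for the analytic center (sketch of Section 3).

    Maintains A x = 0 and x > 0, and stops once ||p(x)|| = ||e - x s|| <= theta.
    """
    for _ in range(max_iter):
        X2 = x * x
        # y solving min_y ||e - x(c - A^T y)||, i.e. (A X^2 A^T) y = A X^2 c
        y = np.linalg.solve((A * X2) @ A.T, A @ (X2 * c))
        s = c - A.T @ y
        p = 1.0 - x * s                        # Newton displacement p(x)
        if np.linalg.norm(p) <= theta:
            break
        alpha = 1.0 / (1.0 + np.linalg.norm(p))
        x = x * (1.0 + alpha * p)              # damped step: keeps x > 0 and A x = 0
    return x, y, s

# Demo on the cube 0 <= y <= e in R^2 (A = (I, -I), c = (e; 0)):
# by symmetry the analytic center is y = e/2.
n = 2
A = np.hstack([np.eye(n), -np.eye(n)])
c = np.concatenate([np.ones(n), np.zeros(n)])
x_ac, y_ac, s_ac = primal_newton(A, c, np.ones(2 * n))
```

The invariance $Ax = 0$ along the iteration follows from $A(x\,p(x)) = Ax - AX^2(c - A^T y) = 0$ by the definition of $y$, which is why no re-projection is needed after the damped step.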
Since we shall need the bound on the dual potential, we state this result here as an additional lemma.

Lemma 3.5 Let $s \in \operatorname{int} F_D$ and $x \in \operatorname{int} F_P$. Assume $\|e - xs\| \le \theta < 1$. Let $s^c$ be the exact analytic center and denote $\varphi_D^c = \varphi_D(s^c)$ the optimal value of the potential. Then,
$$\varphi_D^c \ge \varphi_D(s) \ge \varphi_D^c - \frac{\theta^2}{1 - \theta^2}.$$
For a formal statement and proof of the above results, see [16] and [20].

4 Adding two central cuts

Assume that an approximate analytic center has been computed, i.e., $(\bar{x}, \bar{y}, \bar{s})$ is such that
$$A^T \bar{y} + \bar{s} = c, \; \bar{s} > 0, \quad A\bar{x} = 0, \; \bar{x} > 0, \qquad (10)$$
$$\|e - \bar{x}\bar{s}\| \le \theta < 1, \quad \text{with } \bar{y} = (A\bar{X}^2 A^T)^{-1} A\bar{X}^2 c. \qquad (11)$$
The oracle is called at $\bar{y}$ and returns the two central cuts
$$a_{m+1}^T y \le a_{m+1}^T \bar{y} \quad \text{and} \quad a_{m+2}^T y \le a_{m+2}^T \bar{y}.$$
These inequalities are valid for all $y \in C$. After adding the new constraints, the new polytope is
$$\tilde{F}_D = \{y : \tilde{A}^T y \le \tilde{c}\}, \quad \text{with } \tilde{A} = (A, a_{m+1}, a_{m+2}) \text{ and } \tilde{c} = \begin{pmatrix} c \\ a_{m+1}^T \bar{y} \\ a_{m+2}^T \bar{y} \end{pmatrix}.$$
In the sequel, we shall use the tilde to denote elements of the augmented system.

4.1 Recovering feasibility

To achieve dual feasibility, one has to move the current point $\bar{y}$ within the feasible set $\tilde{F}_D$. Dikin's ellipsoid with the primal scaling,
$$E := \{y : \|\bar{X} A^T (y - \bar{y})\| \le 1\},$$
defines a trust region around $\bar{y}$; since $(\bar{x}, \bar{s})$ is only a $\theta$-center, its scaled version satisfies $(1-\theta)E \subset F_D$. To restore feasibility, we look at points which lie along a suitable direction. We suggest the direction which maximizes the product of the two new slacks within this trust region. Formally, the problem is
$$\max \left\{ \sum_{j=1}^2 \log s_{m+j} : s_{m+j} = -a_{m+j}^T \Delta y, \; j = 1, 2, \; \|\bar{X} A^T \Delta y\| \le 1 - \theta \right\}. \qquad (12)$$

Theorem 4.1 The solution of problem (12) is
$$\Delta y = -\frac{1-\theta}{\sqrt{2(1 + r_{12})}}\, (A\bar{X}^2 A^T)^{-1} b,$$
where
$$\sigma_j = \sqrt{a_{m+j}^T (A\bar{X}^2 A^T)^{-1} a_{m+j}}, \; j = 1, 2, \qquad b = \frac{a_{m+1}}{\sigma_1} + \frac{a_{m+2}}{\sigma_2},$$
$$\sigma_{12} = a_{m+1}^T (A\bar{X}^2 A^T)^{-1} a_{m+2}, \qquad r_{12} = \frac{\sigma_{12}}{\sigma_1 \sigma_2}.$$

Proof To identify the solution of (12), we transform Dikin's ellipsoid by the affine mapping $z = H^{1/2} \Delta y$, where $H = A\bar{X}^2 A^T$.
The transformed problem is
$$\max \left\{ \sum_{j=1}^2 \log s_{m+j} : s_{m+j} = -a_{m+j}^T H^{-1/2} z, \; j = 1, 2, \; \|z\| \le 1 - \theta \right\}.$$
By symmetry, the optimal direction is $-d$, where
$$d = \sum_{j=1}^2 \frac{H^{-1/2} a_{m+j}}{\|H^{-1/2} a_{m+j}\|} = \sum_{j=1}^2 \frac{H^{-1/2} a_{m+j}}{\sigma_j} = H^{-1/2} b,$$
and the solution is
$$z = -(1 - \theta)\, \frac{d}{\|d\|} = -(1 - \theta)\, \frac{d}{\sqrt{b^T H^{-1} b}} = -(1 - \theta)\, \frac{d}{\sqrt{2(1 + r_{12})}}.$$
The direction in the original space is
$$\Delta y = H^{-1/2} z = -\frac{1 - \theta}{\sqrt{2(1 + r_{12})}}\, (A\bar{X}^2 A^T)^{-1} b.$$

A step $\bar{y} + \mu \Delta y$, with $0 \le \mu \le 1$, lies in the inner ellipsoid $(1-\theta)E \subset F_D$ and is thus feasible. It follows that
$$s(\mu) = \bar{s} + \mu \Delta s \ge 0, \quad \text{where } \Delta s = -A^T \Delta y = \frac{1 - \theta}{\sqrt{2(1 + r_{12})}}\, A^T (A\bar{X}^2 A^T)^{-1} b.$$
The slacks corresponding to the new cuts are, for $j = 1, 2$,
$$s_{m+j}(\mu) = -\mu\, a_{m+j}^T \Delta y = \mu\, \frac{1 - \theta}{\sqrt{2(1 + r_{12})}}\, \sigma_j (1 + r_{12}) = \mu\, \frac{1 - \theta}{\sqrt{2}}\, \sigma_j \sqrt{1 + r_{12}}.$$
Therefore, the vector of new slacks is
$$\tilde{s}(\mu) = \begin{pmatrix} \bar{s} + \mu\, \frac{1-\theta}{\sqrt{2(1+r_{12})}}\, A^T (A\bar{X}^2 A^T)^{-1} b \\[4pt] \mu\, \frac{1-\theta}{\sqrt{2}}\, \sigma_1 \sqrt{1 + r_{12}} \\[4pt] \mu\, \frac{1-\theta}{\sqrt{2}}\, \sigma_2 \sqrt{1 + r_{12}} \end{pmatrix}.$$
It exhibits strict dual feasibility for the augmented problem for $0 < \mu < 1$.

To motivate our choice of a suitable primal direction, we require that the new pair $(\bar{x} + \Delta x, \bar{s} + \Delta s)$ depart as little as possible from the centrality of $(\bar{x}, \bar{s})$. This introduces the condition $\bar{x}\Delta s + \bar{s}\Delta x = 0$. Since we compute analytic centers via the primal Newton algorithm, we want to use the primal scaling $\bar{X}$. Thus we replace $\bar{s}$ with $\bar{x}^{-1}$ in the condition and get the direction
$$\Delta x = -\bar{X}^2 \Delta s = -\frac{1 - \theta}{\sqrt{2(1 + r_{12})}}\, \bar{X}^2 A^T (A\bar{X}^2 A^T)^{-1} b.$$
Since
$$A\Delta x = -\frac{1 - \theta}{\sqrt{2(1 + r_{12})}}\, b,$$
the point
$$\tilde{x}(\mu) = \begin{pmatrix} \bar{x} \\ 0 \\ 0 \end{pmatrix} + \mu \begin{pmatrix} \Delta x \\[2pt] \frac{1-\theta}{\sigma_1 \sqrt{2(1+r_{12})}} \\[4pt] \frac{1-\theta}{\sigma_2 \sqrt{2(1+r_{12})}} \end{pmatrix} \qquad (13)$$
is primal feasible for the augmented problem. This implies that the new components of the primal variable are
$$x_{m+j}(\mu) = \mu\, \frac{1 - \theta}{\sigma_j \sqrt{2(1 + r_{12})}}, \quad j = 1, 2.$$
As $\|\bar{X}^{-1}\Delta x\| = \|\bar{X}\Delta s\| = 1 - \theta$, it follows that $\mu(1 - \theta) < 1$ implies that $\tilde{x}(\mu)$ is positive.
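The quantities of Theorem 4.1 are cheap to compute once $H = A\bar{X}^2A^T$ is factored. The following sketch (an illustration, not the paper's code; the function name and the tiny cube instance are assumptions) forms $\sigma_1$, $\sigma_2$, $r_{12}$ and the direction $\Delta y$, and the demo instance lets one verify that the step reaches the boundary of the shrunken Dikin ellipsoid and opens both new slacks.

```python
import numpy as np

def two_cut_direction(A, x_bar, a1, a2, theta=0.25):
    """Updating direction in the spirit of Theorem 4.1: maximize the product of
    the two new slacks over the shrunken Dikin ellipsoid ||X A^T dy|| <= 1 - theta."""
    H = (A * (x_bar * x_bar)) @ A.T          # H = A X^2 A^T
    Hinv = np.linalg.inv(H)
    s1 = np.sqrt(a1 @ Hinv @ a1)             # sigma_1
    s2 = np.sqrt(a2 @ Hinv @ a2)             # sigma_2
    r12 = (a1 @ Hinv @ a2) / (s1 * s2)       # cosine of the cuts in Dikin's metric
    b = a1 / s1 + a2 / s2
    dy = -(1.0 - theta) / np.sqrt(2.0 * (1.0 + r12)) * (Hinv @ b)
    return dy, s1, s2, r12

# Demo: cube 0 <= y <= e in R^2 at its exact analytic center (x_bar s_bar = e),
# with two orthogonal central cuts.
n = 2
A = np.hstack([np.eye(n), -np.eye(n)])
x_bar = 2.0 * np.ones(2 * n)
a1 = np.array([1.0, 0.0])
a2 = np.array([0.0, 1.0])
dy, s1, s2, r12 = two_cut_direction(A, x_bar, a1, a2, theta=0.25)
```

On this instance $r_{12} = 0$ (the cuts are orthogonal in Dikin's metric), $\|\bar X A^T \Delta y\| = 1-\theta$, and the two new slacks $-a_j^T\Delta y$ both equal $(1-\theta)\sigma_j\sqrt{1+r_{12}}/\sqrt{2}$, as in the derivation above.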
The linear term of the potential $\tilde{\varphi}_P$ at $\tilde{x}(\mu)$ in the augmented problem is
$$\tilde{c}^T \tilde{x}(\mu) = c^T(\bar{x} + \mu\Delta x) + \mu \sum_{j=1}^{2} \frac{1-\theta}{\sigma_j \sqrt{2(1+r_{12})}}\, a_{m+j}^T \bar{y} = c^T(\bar{x} + \mu\Delta x) + \mu\, \frac{1-\theta}{\sqrt{2(1+r_{12})}}\, b^T \bar{y}.$$
Replacing $\Delta x$ by its expression and using (11), one may write
$$\tilde{c}^T \tilde{x}(\mu) = c^T \bar{x} - \mu\, \frac{1-\theta}{\sqrt{2(1+r_{12})}} \left( c^T \bar{X}^2 A^T (A\bar{X}^2 A^T)^{-1} b - b^T \bar{y} \right) = c^T \bar{x}. \qquad (14)$$

4.2 Recovering centrality

The pair of points $(\tilde{x}(\mu), \tilde{s}(\mu))$ is feasible for small positive values of $\mu$ but, in general, not centered. To see this, we compute the vector
$$\tilde{e} - \tilde{x}(\mu)\tilde{s}(\mu) = \begin{pmatrix} e - (\bar{s} + \mu\Delta s)(\bar{x} + \mu\Delta x) \\[2pt] 1 - \frac{\mu^2(1-\theta)^2}{2} \\[4pt] 1 - \frac{\mu^2(1-\theta)^2}{2} \end{pmatrix},$$
where $\tilde{e}$ is the vector of all ones in $\mathbb{R}^{m+2}$. If $\mu$ is small, then the first $m$ components of the above vector are close to zero, while the last two are close to 1. Hence $\|\tilde{e} - \tilde{x}(\mu)\tilde{s}(\mu)\|$ is close to $\sqrt{2}$, and one cannot assert in general that there exists a positive $\mu$ such that the proximity measure is reduced to a quantity less than one. This also appears unreachable with the use of weighted potentials and their associated proximity measures. However, we shall prove that the updated approximate center can be computed in $O(1)$ iterations.

We first need to prove several technical lemmas. In all of them, $\Delta y$ is defined as in Theorem 4.1, with $\Delta s = -A^T \Delta y$ and $\Delta x = -\bar{X}^2 \Delta s$.

Lemma 4.2 Let $h = \bar{X}^{-1}\Delta x$ and $g = \bar{s}^{-1}\Delta s$. Then,
1. $\|h\| = 1 - \theta$ and $e^T h = 0$;
2. $\|g\| \le 1$ and $|e^T g| \le \theta$.

Proof By construction, $\|h\| = \|\bar{X}^{-1}\Delta x\| = \|\bar{X}\Delta s\| = 1 - \theta$. Besides,
$$e^T h = e^T \bar{X}^{-1}\Delta x = -e^T \bar{X}\Delta s = -\bar{x}^T \Delta s = \bar{x}^T A^T \Delta y = (A\bar{x})^T \Delta y = 0.$$
From the definition of $g$,
$$\|g\| = \|(\bar{s}\bar{x})^{-1}\, \bar{x}\Delta s\| \le \max_i \frac{1}{\bar{x}_i \bar{s}_i}\, \|\bar{X}\Delta s\| \le \frac{1-\theta}{1-\theta} = 1,$$
since $\|e - \bar{x}\bar{s}\| \le \theta$ implies $\bar{x}_i \bar{s}_i \ge 1 - \theta$ for all $i$. Denote $\delta = e - \bar{s}\bar{x}$. From $\bar{s}^{-1} = \bar{x} + \bar{s}^{-1}\delta$ and $A\bar{x} = 0$, we get $e^T \bar{S}^{-1} A^T = \delta^T \bar{S}^{-1} A^T$. Hence
$$|e^T g| = |e^T \bar{S}^{-1}\Delta s| = |\delta^T \bar{S}^{-1} A^T \Delta y| = |\delta^T (\bar{S}\bar{X})^{-1} \bar{X} A^T \Delta y| \le \|\delta\| \max_i \frac{1}{\bar{x}_i \bar{s}_i}\, \|\bar{X} A^T \Delta y\| \le \theta \cdot \frac{1}{1-\theta} \cdot (1-\theta) = \theta.$$

Lemma 4.3 Let $h$ and $g$ be defined as in Lemma 4.2. For all $0 < \mu < 1$:
1. $\sum_{i=1}^m \log(1 + \mu h_i) \ge \mu(1-\theta) + \log(1 - \mu(1-\theta))$;
2. $\sum_{i=1}^m \log(1 + \mu g_i) \ge -\mu\theta + \mu + \log(1 - \mu)$.

Proof Clearly, $\|\mu h\| = \mu(1-\theta) < 1$. We can use the well-known inequality on the logarithmic function,
$$\sum_{i=1}^m \log(1 + t_i) \ge e^T t + \|t\| + \log(1 - \|t\|), \quad \|t\| < 1,$$
with $t = \mu h$. Since $e^T h = 0$, this yields
$$\sum_{i=1}^m \log(1 + \mu h_i) \ge \mu\|h\| + \log(1 - \mu\|h\|) = \mu(1-\theta) + \log(1 - \mu(1-\theta)).$$
To prove the second statement, we note that $\|\mu g\| \le \mu < 1$. The same inequality yields
$$\sum_{i=1}^m \log(1 + \mu g_i) \ge \mu\, e^T g + \mu\|g\| + \log(1 - \mu\|g\|).$$
Since $t + \log(1 - t)$, $t < 1$, is a decreasing function of $t$, we can use Lemma 4.2 and $\|g\| \le 1$ to bound the right-hand side. We get
$$\sum_{i=1}^m \log(1 + \mu g_i) \ge -\mu\theta + \mu + \log(1 - \mu).$$

Theorem 4.4 The number of Newton steps needed to compute the updated $\theta$-analytic center is bounded by
$$\nu = \frac{\delta_2 - 2}{\delta_1} = O(1),$$
where
$$\delta_2 = -2\log\frac{\mu^2(1-\theta)^2}{2} - 2\mu(1-\theta) - \log\left[(1-\mu)(1 - \mu(1-\theta))\right] + \frac{2\theta^2}{1-\theta^2}$$
and $\delta_1 = \theta - \log(1+\theta)$.

Proof To bound the number of Newton steps, we compute the optimality gap
$$\Delta\tilde{\varphi}_{PD} = (\tilde{\varphi}_P^c + \tilde{\varphi}_D^c) - \left(\tilde{\varphi}_P(\tilde{x}(\mu)) + \tilde{\varphi}_D(\tilde{s}(\mu))\right)$$
for the sum of the primal and dual potentials. On the one hand, $\tilde{\varphi}_P^c + \tilde{\varphi}_D^c = -(m+2)$. On the other hand, we can write
$$\tilde{\varphi}_P(\tilde{x}(\mu)) + \tilde{\varphi}_D(\tilde{s}(\mu)) = -\tilde{c}^T\tilde{x}(\mu) + \sum_{i=1}^m \left(\log(\bar{x}_i + \mu\Delta x_i) + \log(\bar{s}_i + \mu\Delta s_i)\right) + \sum_{j=1}^2 \log x_{m+j}(\mu)\, s_{m+j}(\mu)$$
$$= -c^T\bar{x} + 2\log\frac{\mu^2(1-\theta)^2}{2} + \sum_{i=1}^m \left(\log(\bar{x}_i + \mu\Delta x_i) + \log(\bar{s}_i + \mu\Delta s_i)\right).$$
Using Lemma 4.3 and $\bar{X}^{-1}\Delta x = h$, we have
$$-c^T\bar{x} + \sum_{i=1}^m \log(\bar{x}_i + \mu\Delta x_i) = \varphi_P(\bar{x}) + \sum_{i=1}^m \log(1 + \mu h_i) \ge \varphi_P(\bar{x}) + \mu(1-\theta) + \log(1 - \mu(1-\theta)).$$
For the dual variables we have
$$\sum_{i=1}^m \log(\bar{s}_i + \mu\Delta s_i) = \varphi_D(\bar{s}) + \sum_{i=1}^m \log(1 + \mu g_i) \ge \varphi_D(\bar{s}) - \mu\theta + \mu + \log(1 - \mu).$$
Consequently,
$$\tilde{\varphi}_P(\tilde{x}(\mu)) + \tilde{\varphi}_D(\tilde{s}(\mu)) \ge \varphi_P(\bar{x}) + \varphi_D(\bar{s}) + 2\log\frac{\mu^2(1-\theta)^2}{2} + 2\mu(1-\theta) + \log(1 - \mu(1-\theta)) + \log(1 - \mu).$$
In view of Corollary 3.2 and Lemma 3.5, the primal and dual potentials at $(\bar{x}, \bar{s})$ are bounded below by
$$\varphi_P(\bar{x}) + \varphi_D(\bar{s}) \ge \varphi_P^c + \varphi_D^c - \frac{2\theta^2}{1-\theta^2} = -m - \frac{2\theta^2}{1-\theta^2}.$$
With $\delta_2$ as defined above, we obtain the bound on the primal-dual gap
$$\Delta\tilde{\varphi}_{PD} \le -(m+2) + m + \delta_2 = \delta_2 - 2.$$
Assume now that we have computed some suitable $\mu$. To compute the next center, we use the primal Newton algorithm with damped steps and initial point $\tilde{x}(\mu)$.
As long as $\tilde{x}$ is not centered, one has $\|p(\tilde{x})\| > \theta$: a damped primal step then increases the primal potential by at least $\delta_1$. After $\nu$ such steps, the primal-dual potential satisfies
$$\tilde{\varphi}_{PD}(\tilde{x}(\mu), \tilde{s}(\mu)) + \nu\delta_1 \le \tilde{\varphi}_{PD}(\tilde{x}^\nu, \tilde{s}(\mu)) \le \tilde{\varphi}_{PD}(\tilde{x}^c, \tilde{s}^c).$$
Thus
$$\nu \le \frac{\delta_2 - 2}{\delta_1} = O(1).$$

At this stage, we should note that the primal-dual potential is separable: its maximization decomposes into two independent maximizations, one over the primal and one over the dual. In view of this, one can perform independent linesearches along the primal and the dual directions. This would yield an optimal pair $(\tilde{x}(\mu_P), \tilde{s}(\mu_D))$ from which to start the computation of the next analytic center. Since the primal and the dual components are both interior feasible, one can use either a primal, a dual, or a primal-dual interior point method.

The above analysis breaks down if the two cuts are deep (or one deep and one shallow). In that case $\tilde{x}(\mu)$ remains a valid starting point for the primal algorithm, while dual feasibility may not be recoverable by a linesearch along the dual direction $\Delta s$. This justifies the use of the primal algorithm in the case of multiple very deep cuts.

5 Potential adjustment

Theorem 5.1
$$\tilde{\varphi}_D(\tilde{s}^c) \le \varphi_D(s^c) + \log\left(\sigma_1 \sigma_2 (1 + r_{12})\right) + \delta,$$
where
$$\delta = -2 + \log 2 + \frac{\theta^2}{1-\theta^2} - \mu(1-\theta) - \log(1 - \mu(1-\theta)) - 2\log(\mu(1-\theta)).$$

Proof The proof consists of putting together three inequalities. The first uses $\tilde{\varphi}_P^c \ge \tilde{\varphi}_P(\tilde{x}(\mu))$ and the duality relation (4) to yield
$$\tilde{\varphi}_D^c = -(m+2) - \tilde{\varphi}_P^c \le -(m+2) - \tilde{\varphi}_P(\tilde{x}(\mu)). \qquad (15)$$
The second starts with the definition of the potential
$$\tilde{\varphi}_P(\tilde{x}(\mu)) = -\tilde{c}^T\tilde{x}(\mu) + \sum_{i=1}^{m+2} \log \tilde{x}_i(\mu). \qquad (16)$$
Let us compute a bound for the right-hand side. First, recall that
$$\tilde{c}^T\tilde{x}(\mu) = c^T\bar{x}. \qquad (17)$$
Next, by Lemma 4.3,
$$\sum_{i=1}^m \log(1 + \mu h_i) \ge \mu(1-\theta) + \log(1 - \mu(1-\theta)). \qquad (18)$$
Finally,
$$\log\left(x_{m+1}(\mu)\, x_{m+2}(\mu)\right) = 2\log(\mu(1-\theta)) - \log\left(2\sigma_1\sigma_2(1 + r_{12})\right). \qquad (19)$$
Inserting (17)-(19) into (16) yields the second main inequality
$$\tilde{\varphi}_P(\tilde{x}(\mu)) \ge \varphi_P(\bar{x}) + \mu(1-\theta) + \log(1 - \mu(1-\theta)) + 2\log(\mu(1-\theta)) - \log\left(2\sigma_1\sigma_2(1 + r_{12})\right). \qquad (20)$$
The third inequality uses Corollary 3.2.
We have
$$\varphi_P(\bar{x}) \ge \varphi_P^c - \frac{\theta^2}{1-\theta^2} = -\varphi_D^c - m - \frac{\theta^2}{1-\theta^2}. \qquad (21)$$
Putting together (15), (20) and (21) yields
$$\tilde{\varphi}_D^c \le \varphi_D^c - 2 + \frac{\theta^2}{1-\theta^2} - \mu(1-\theta) - \log(1 - \mu(1-\theta)) - 2\log(\mu(1-\theta)) + \log\left(2\sigma_1\sigma_2(1 + r_{12})\right). \qquad (22)$$
To minimize the upper bound on the potential at the new point, the step size $\mu$ is chosen so as to minimize $\delta$, that is,
$$\mu = \frac{\sqrt{3} - 1}{1 - \theta}. \qquad (23)$$
We denote $\delta^*$ the optimal value of $\delta$.

6 ACCPM: statement and convergence

ACCPM can be stated briefly as follows.

Initialization. Let $F_D^0 = \{y : 0 \le y \le e\}$ be the unit cube and $y^0 = \frac{1}{2}e$ its center. The centering parameter is $0 < \theta < 1$.

Basic step. $y^k$ is a $\theta$-approximate center of $F_D^k$.
1. The oracle returns the cuts $a_{2k}$ and $a_{2k+1}$ at $y^k$.
2. Update $F_D^{k+1} = F_D^k \cap \{y : a_{2k+j}^T(y - y^k) \le 0, \; j = 0, 1\}$.
3. Compute the re-entry direction according to Theorem 4.1.
4. Use the primal Newton algorithm to compute $y^{k+1}$, a $\theta$-center of $F_D^{k+1}$.

The convergence analysis is a minor modification of the proofs in [5] and [19]. Denote $\bar{P} = \varphi_D(s^c) = \max\{\varphi_D(s) : s \in F_D\}$ and let $\bar{P}^k$ be the same value after $k$ calls to the oracle, that is, after adding $2k$ cuts. By Theorem 5.1 the following inequality holds:
$$\bar{P}^k \le \bar{P}^0 + \sum_{j=0}^{k-1} \sum_{i=1}^2 \log \sigma_i^j + k\delta^* + \sum_{j=0}^{k-1} \log(1 + r_{12}^j).$$
The last term on the right-hand side is bounded above by $k\log 2$. One could simply incorporate this term into $k\delta^*$ and conduct the complexity analysis without further concern. However, we choose to keep this term apart, to emphasize the positive impact of two cuts forming an acute angle in Dikin's metric, i.e., with $-1 < r_{12}^j < 0$.

Theorem 10 of [19] can be used here in the particular case of two cuts, and gives:

Theorem 6.1 The algorithm stops with a solution as soon as $k$ satisfies
$$\log\varepsilon \ge -\frac{1}{2}\log\frac{2n+2k+2}{n^2} + \frac{18n^2\log\left(1 + \frac{2n+2k+2}{8n^2}\right) + (k+1)\delta^* + \sum_{j=0}^{k} \log(1 + r_{12}^j)}{2n+2k+2}.$$
Furthermore, the number of damped Newton steps per iteration is $O(1)$, and thus both the number of iterations and the total number of Newton steps are in $O(n^2/\varepsilon^2)$.
Proof The proof of Theorem 10 of [19] shows that
$$\bar{P}^0 + \sum_{j=0}^{k} \sum_{i=1}^2 \log \sigma_i^j \le -\frac{2n+2k+2}{2}\log\frac{2n+2k+2}{n^2} + 18n^2\log\left(1 + \frac{2n+2k+2}{8n^2}\right).$$
If the algorithm has not stopped after $k + 1$ iterations with a feasible solution, then
$$\bar{P}^{k+1} \ge (2n+2k+2)\log\varepsilon;$$
thus
$$(2n+2k+2)\log\varepsilon \le \bar{P}^{k+1} \le \bar{P}^0 + \sum_{j=0}^{k}\sum_{i=1}^2 \log\sigma_i^j + (k+1)\delta^* + \sum_{j=0}^{k}\log(1 + r_{12}^j)$$
$$\le -\frac{2n+2k+2}{2}\log\frac{2n+2k+2}{n^2} + 18n^2\log\left(1 + \frac{2n+2k+2}{8n^2}\right) + (k+1)\delta^* + \sum_{j=0}^{k}\log(1 + r_{12}^j).$$

7 Conclusion

We conclude with a brief discussion of possible extensions.

In this paper, we defined an efficient direction to restore feasibility and centrality after adding two new central cuts simultaneously. The direction is efficient in the sense that it maximizes the two new terms brought into the dual potential, namely $\sum_{j=1}^2 \log \tilde{s}_{m+j}$, under the constraint that the other variables remain within the Dikin ellipsoid. As one could expect, the analysis showed that the more acute the angle between the two cuts, the smaller the growth of the potential between the old and the new analytic centers. Thus acute cuts are likely to speed up convergence.

One could conceivably try to extend this approach to multiple cuts. One would then have to maximize a nonlinear function $\sum_{j=1}^p \log \tilde{s}_{m+j}$ with more than two terms, $p > 2$, under a single quadratic constraint. No closed-form solution exists when $p > 2$, but it is still possible to compute the solution via an interior point method.

An alternative to the computation of the optimal direction would be to take a simple aggregation of the new cuts and compute a re-entry direction with respect to this single representative of the cuts. The search direction may not yield a feasible point in the dual, but it still exhibits a primal feasible point from which to start iterating in the primal space. This is the approach followed by Ye [19]; it is also the one implemented in [9].
Of course, this approach does not exploit the relative position of the cuts to construct a good estimate of the new analytic center.

The other extension concerns the depth of the cuts. In our presentation, we only considered central cuts. In practice, the oracle almost always produces deep, if not very deep, cuts. Putting those deep cuts in a central position may induce a loss of valuable information on the set of localization. Nevertheless, we could still use the search direction of Section 4 to construct an interior primal feasible point. A long-step analysis as in [8] would then still guarantee convergence in $O(n^2/\varepsilon^2)$ calls to the oracle, but with a slightly worse bound on the total number of Newton iterations, $O\left(\frac{n^2}{\varepsilon^2}\log\frac{1}{\varepsilon}\right)$.

References

[1] D. S. Atkinson and P. M. Vaidya (1995), "A cutting plane algorithm that uses analytic centers", Mathematical Programming, Series B, 69, 1-43.
[2] O. Bahn, O. du Merle, J.-L. Goffin and J.-P. Vial (1995), "A Cutting Plane Method from Analytic Centers for Stochastic Programming", Mathematical Programming, Series B, 69, 45-73.
[3] J.-L. Goffin, J. Gondzio, R. Sarkissian and J.-P. Vial (1997), "Solving Nonlinear Multicommodity Flow Problems by the Analytic Center Cutting Plane Method", Mathematical Programming, Series B, 76(1), 131-154.
[4] J.-L. Goffin, A. Haurie and J.-P. Vial (1992), "Decomposition and nondifferentiable optimization with the projective algorithm", Management Science 38, 284-302.
[5] J.-L. Goffin, Z.-Q. Luo and Y. Ye (1996), "Complexity analysis of an interior cutting plane method for convex feasibility problems", SIAM Journal on Optimization, 6, 638-652.
[6] J.-L. Goffin and F. Sharifi Mokhtarian, "Using the Primal Dual Infeasible Newton Method in the Analytic Center Method for Problems Defined by Deep Cutting Planes", Cahier du GERAD G-94-41, ISSN 0711-2440, 24 pp., September 1994, revised March 1998; to appear in Journal of Optimization Theory and Applications.
[7] J.-L. Goffin and J.-P. Vial (1993), "On the Computation of Weighted Analytic Centers and Dual Ellipsoids with the Projective Algorithm", Mathematical Programming 60, 81-92.
[8] J.-L. Goffin and J.-P. Vial (1996), "Shallow, deep and very deep cuts in the analytic center cutting plane method", Logilab Technical Report 96-3, Department of Management Studies, University of Geneva, Switzerland. Revised June 1997. To appear in Mathematical Programming.
[9] J. Gondzio, O. du Merle, R. Sarkissian and J.-P. Vial (1996), "ACCPM - A Library for Convex Optimization Based on an Analytic Center Cutting Plane Method", European Journal of Operational Research, 94, 206-211.
[10] N. K. Karmarkar (1984), "A New Polynomial-Time Algorithm for Linear Programming", Combinatorica 4, 373-395.
[11] Z.-Q. Luo (1994), "Analysis of a Cutting Plane Method That Uses Weighted Analytic Center and Multiple Cuts", SIAM Journal on Optimization 4, 697-716.
[12] J. E. Mitchell and M. J. Todd (1992), "Solving combinatorial optimization problems using Karmarkar's algorithm", Mathematical Programming 56, 245-284.
[13] Y. Nesterov (1995), "Cutting plane algorithms from analytic centers: efficiency estimates", Mathematical Programming, Series B, 69, 149-176.
[14] Yu. Nesterov and A. Nemirovsky (1994), Interior Point Polynomial Algorithms in Convex Programming: Theory and Applications, SIAM, Philadelphia.
[15] S. Ramaswamy and J. E. Mitchell (1994), "On Updating the Analytic Center after the Addition of Multiple Cuts", Technical Report 37-94-423, DSES, Rensselaer Polytechnic Institute, Troy, NY 12180.
[16] C. Roos, T. Terlaky and J.-P. Vial (1997), Theory and Algorithms for Linear Optimization: An Interior Point Approach, John Wiley & Sons, Chichester.
[17] C. Roos and J.-P. Vial (1990), "Long steps with the logarithmic barrier function in linear programming", in: Economic Decision Making: Games, Econometrics and Optimization, J. Gabszewicz, J.-F. Richard and L. Wolsey, eds., Elsevier Science Publishers B.V., pp. 433-441.
[18] Y. Ye (1992), "A potential reduction algorithm allowing column generation", SIAM Journal on Optimization 2, 7-20.
[19] Y. Ye (1997), "Complexity Analysis of the Analytic Center Cutting Plane Method That Uses Multiple Cuts", Mathematical Programming 78, 85-104.
[20] Y. Ye (1997), Interior Point Algorithms: Theory and Analysis, John Wiley & Sons, New York.


Journal:
  • Math. Meth. of OR

Volume 49, Issue -

Pages -

Publication date 1999